Conference Proceedings
Balancing out Bias: Achieving Fairness Through Balanced Training
X Han, T Baldwin, T Cohn
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 | Association for Computational Linguistics | Published : 2022
Abstract
Group bias in natural language processing tasks manifests as disparities in system error rates across texts authored by different demographic groups, typically disadvantaging minority groups. Dataset balancing has been shown to be effective at mitigating bias; however, existing approaches do not directly account for correlations between author demographics and linguistic variables, limiting their effectiveness. To achieve Equal Opportunity fairness, such as equal job opportunity without regard to demographics, this paper introduces a simple but highly effective objective for countering bias using balanced training. We extend the method in the form of a gated model, which incorporates prot..
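The Equal Opportunity criterion the abstract refers to requires that a classifier's true positive rate be equal across demographic groups, so the TPR gap is a natural way to quantify the group bias being mitigated. A minimal sketch of that metric, with illustrative names (`y_true`, `y_pred`, `groups`) not taken from the paper:

```python
def tpr_gap(y_true, y_pred, groups):
    """Absolute difference in true positive rate between two demographic groups.

    Equal Opportunity is satisfied when this gap is zero: positive instances
    are recognised at the same rate regardless of group membership.
    """
    tprs = {}
    for g in sorted(set(groups)):
        # gold-positive instances authored by group g
        pos = [(t, p) for t, p, gr in zip(y_true, y_pred, groups)
               if gr == g and t == 1]
        tprs[g] = sum(p for _, p in pos) / len(pos)
    a, b = tprs.values()
    return abs(a - b)
```

For example, if group 0's positives are predicted correctly half the time and group 1's always, the gap is 0.5; balanced training aims to drive this toward zero.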
Grants
Awarded by Australian Research Council